23 research outputs found

    Toward nonprobabilistic explanations of learning and decision-making

    Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation

    In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we examined, empirically, the treatment and evidential impact of nonsignificant results. Our specific goals were twofold: to explore how psychologists interpret and communicate nonsignificant results and to assess how much these results constitute evidence in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF01 > 10) in favor of the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical toolkit in order to correctly interpret nonsignificant results and to evaluate the evidence for and against the null hypothesis.
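    For context, BF01 expresses how much more likely the observed data are under the null hypothesis than under the alternative, with BF01 > 10 conventionally read as strong evidence for the null. Below is a minimal sketch of one common approximation, the BIC-based Bayes factor for a one-sample t-test (Wagenmakers, 2007); the function name and the example numbers are illustrative, not taken from the reanalysis:

```python
import math

def bf01_bic_approx(t: float, n: int) -> float:
    """BIC approximation to the Bayes factor BF01 for a one-sample
    t-test (Wagenmakers, 2007): BF01 ~ sqrt(n) * (1 + t^2/(n-1))^(-n/2).
    Values > 1 favor the null; > 10 is conventionally 'strong' evidence."""
    return math.sqrt(n) * (1.0 + t**2 / (n - 1)) ** (-n / 2.0)

# Illustrative numbers (not from the study): a nonsignificant t-test
# with a small sample typically yields only weak evidence for the null.
print(bf01_bic_approx(t=1.2, n=20))   # ~2.2: only anecdotal support for H0
print(bf01_bic_approx(t=0.1, n=200))  # ~14: strong support for H0
```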

    Data for "No evidence for automatic imitation in strategic context"

    Supplementary data for the outcomes of trials and the reaction times of participants in our experiment.

    The Effect of Transparency on Framing Effects in Within-subjects Designs

    A long-standing assumption about the framing effect is that if participants discover the purpose of the experiment in a within-subject design, this test transparency will prompt them to override their initial answer and make coherent choices. For this reason, researchers try to mask the connection between the two parts of the test by inserting filler questions or a time delay between them. In this research, we explored the extent to which these customarily used masking solutions are effective in increasing test sensitivity for the framing effect. In three experiments, we assessed the effect of masking on tests of the attribute framing and risky-choice framing effects. Contradicting the general belief, our results indicate that these effects are already measurable without any masking or delay, and we found no convincing evidence that attempts to decrease task transparency provide worthwhile benefits for general tests of the effect. Beyond their practical relevance, the results question whether the test is a good measure of coherence rationality, and they better suit those accounts suggesting that the two parts of the framing task cannot be regarded as identical.

    Lax monitoring versus logical intuition: The determinants of confidence in conjunction fallacy

    The general assumption that people fail to notice the discrepancy between their answer and the normative answer in the conjunction fallacy task has been challenged by the theory of Logical Intuition. This theory suggests that people can detect the conflict between the heuristic and normative answers even if they do not always manage to inhibit their intuitive choice. The theory gained support from the finding that people report lower confidence in their choice after committing the conjunction fallacy than when their answer is not in conflict with logic. In four experiments, we asked participants to give probability estimates for the options of the conflict and no-conflict versions of the task, either in the original set-up of the experiment or in a three-option design. We found that participants perceive the probabilities of the options as less similar in the conflict version than in the no-conflict version. Because people are less confident when choosing between more similar options, we propose that this similarity difference acts as a mediator: the conflict and no-conflict conditions affect confidence ratings by manipulating the similarity of the answer options.
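    The proposed mediation (condition → option similarity → confidence) is the kind of relationship usually checked with a product-of-coefficients analysis. A minimal sketch on simulated data follows; the effect sizes and the data are illustrative assumptions, not results from these experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only, not the experiments' data):
# condition (0 = no-conflict, 1 = conflict) -> option similarity -> confidence
n = 400
condition = rng.integers(0, 2, n).astype(float)
similarity = -0.8 * condition + rng.normal(0, 1, n)   # conflict options rated less similar
confidence = -0.9 * similarity + rng.normal(0, 1, n)  # more similar options -> lower confidence

def slopes(y, *predictors):
    """Least-squares slopes for y ~ 1 + predictors (intercept dropped)."""
    X = np.column_stack((np.ones(len(y)),) + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(similarity, condition)[0]              # path a: condition -> similarity
b = slopes(confidence, condition, similarity)[1]  # path b: similarity -> confidence, condition held fixed
print(f"indirect effect a*b = {a * b:.2f}")       # ~0.72 with these illustrative parameters
```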

    Learning and decision making under uncertainty

    How do (and should) people learn and make decisions when they lack knowledge about important elements of the environment? Various forms of this question have been investigated through people's (hypothetical and real) behavior on monetary gambles. In the present thesis, we asked (broadly) what we can learn about how people learn and make decisions under uncertainty from their often seemingly perplexing behavior in such gambles. We investigated this and related issues in two experimental chapters and a theoretical chapter focused on an experience-based variation of monetary gambles. The main empirical findings fall into three groups. First, we replicated and extended findings of people's attempts to identify and exploit the temporal structure of such tasks. Most (but not all) participants engaged in such behavior even when outcomes were randomly generated. Second, we investigated the effects of various manipulations of the way outcome feedback was provided to participants. Incentivizing people to attend to more aspects of the feedback had a small but reliably positive effect on performance, while making feedback relevant and diagnostic with respect to participants' hypotheses improved performance considerably. Lastly, we provided an in-depth analysis of the often-ignored individual variability on such tasks, which indicated that people often differ substantially in their task representations. Additionally, we outlined a general theoretical framework for how people learn and decide in such contexts, based on the metaphor of people as intuitive scientists (contrasted with the currently more popular metaphor of people as intuitive statisticians). In this framework, we explained the three main results as arising at least partly from people's background knowledge and from the diagnosticity and relevance of feedback for their hypotheses about the experimental environment. We also highlighted the many benefits of this type of explanation for more general questions about how people learn and make decisions under uncertainty.
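    Experience-based gamble tasks like the ones studied here are conventionally modeled in the "intuitive statistician" tradition with a delta-rule value learner and softmax choice. A minimal sketch of that baseline follows; all parameters and the payoff structure are chosen purely for illustration and are not taken from the thesis:

```python
import math
import random

random.seed(1)

alpha, beta = 0.2, 3.0  # learning rate and choice sensitivity (illustrative values)
values = [0.0, 0.0]     # running payoff estimates for two gamble options

def softmax_choice(v):
    """Pick an option with probability proportional to exp(beta * value)."""
    weights = [math.exp(beta * x) for x in v]
    r = random.random() * sum(weights)
    return 0 if r < weights[0] else 1

for _ in range(200):
    choice = softmax_choice(values)
    # Hypothetical payoffs: option 0 pays 0.3 for sure; option 1 pays 1 with p = .4
    payoff = 0.3 if choice == 0 else (1.0 if random.random() < 0.4 else 0.0)
    values[choice] += alpha * (payoff - values[choice])  # delta-rule update

print([round(v, 2) for v in values])  # estimates drift toward 0.3 and 0.4
```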